62 results for CHD Prediction, Blood Serum Data Chemometrics Methods

in Deakin Research Online - Australia


Relevance:

100.00%

Abstract:

Lung cancer is a leading cause of cancer-related death worldwide. Early diagnosis has been shown to be greatly helpful for treating the disease effectively. Microarray technology provides a promising approach to exploiting gene profiles for cancer diagnosis. In this study, the authors propose a gene expression programming (GEP)-based model to predict lung cancer from microarray data. The authors use two gene selection methods to extract the significant lung cancer-related genes, and accordingly propose different GEP-based prediction models. Prediction performance evaluations and comparisons between the authors' GEP models and three representative machine learning methods (support vector machine, multi-layer perceptron and radial basis function neural network) were conducted thoroughly on real microarray lung cancer datasets. Reliability was assessed by cross-dataset validation. The experimental results show that the GEP model using fewer feature genes outperformed the other models in terms of accuracy, sensitivity, specificity and area under the receiver operating characteristic curve. It is concluded that the GEP model is a better solution to lung cancer prediction problems.
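The abstract does not describe its implementation, but the four performance measures it reports can be computed from first principles. The sketch below is an illustration only: the labels, classifier scores and 0.5 decision threshold are invented for the example, not taken from the study.

```python
# Hedged sketch: accuracy, sensitivity, specificity and AUC for a binary
# classifier, computed from scratch. Illustrative data, not from the study.

def confusion_counts(y_true, y_pred):
    """Return (TP, TN, FP, FN) for binary labels (1 = cancer, 0 = healthy)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def accuracy(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(y_true, y_pred):
    tp, _, _, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn)

def specificity(y_true, y_pred):
    _, tn, fp, _ = confusion_counts(y_true, y_pred)
    return tn / (tn + fp)

def auc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative labels, classifier scores, and a 0.5 decision threshold
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
print(accuracy(y_true, y_pred), sensitivity(y_true, y_pred),
      specificity(y_true, y_pred), auc(y_true, scores))
```

The rank formulation of AUC avoids building an explicit ROC curve: it is the probability that a randomly chosen positive case scores above a randomly chosen negative one.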

Relevance:

100.00%

Abstract:

New communications technologies often allow new ways of conducting market research. Determining the advantages of a new data collection method over established alternatives is difficult without thorough comparative testing. Computer-mediated marketing research is one such example of a new technology that has been enthusiastically embraced by marketing organisations and those servicing them. While researchers using the Internet (Net) and World Wide Web (Web) in its early years reported benefits such as high response levels, there is little comparative evidence to support any claimed advantages. This paper reports on the outcomes of three separate studies in which members (subscribers) of various organisations were surveyed using both postal and online (email invitation and HTML Web form) data collection methods. The conclusion is that it would be unwise to assume that one method can be directly substituted for another and obtain the same response. Differences in both the response pattern and the demographic profile of respondents were consistently observed between the groups, sufficient to warrant further examination of the methods used in online marketing research and to suggest the need for further study.

Relevance:

100.00%

Abstract:

Computer-mediated marketing research has been enthusiastically embraced by marketing organisations and those servicing them, for many reasons. While researchers using the Internet (Net) and World Wide Web (Web) in its early years reported benefits such as high response levels, there are now concerns in this regard. This paper reports on the outcomes of a probabilistic study in which football club members (subscribers) were surveyed using both postal and online (e-mail invitation and HTML Web form) data collection methods. The paper reports differences in both the response pattern and the demographic profile of respondents between the groups, sufficient to warrant further examination of the methods used in online marketing research and to suggest the need for further study.

Relevance:

100.00%

Abstract:

Objective: The pharmacokinetic profile of a drug often gives little indication of its potential therapeutic application, with many therapeutic uses of drugs being discovered serendipitously while being studied for different indications. As hypothesis-driven, quantitative research methodology is exclusively used in early-phase trials, unexpected but important phenomena may escape detection. In this context, this study aimed to examine the potential for integrating qualitative research methods with quantitative methods in early-phase drug trials. To our knowledge, this mixed methodology has not previously been applied to blinded psychopharmacologic trials.

Method: We undertook qualitative data analysis of clinical observations on the dataset of a randomized, double-blind, placebo-controlled trial of N-acetylcysteine (NAC) in patients with DSM-IV-TR–diagnosed schizophrenia (N = 140). Textual data on all participants, deliberately collected for this purpose, were coded using NVivo 2, and emergent themes were analyzed in a blinded manner in the NAC and placebo groups. The trial was conducted from November 2002 to July 2005.

Results: The principal findings of the published trial could be replicated using a qualitative methodology. In addition, significant differences between NAC- and placebo-treated participants emerged for positive and affective symptoms, which had not been captured by the rating scales utilized in the quantitative trial. Qualitative data in this study subsequently led to a positive trial of NAC in bipolar disorder.

Conclusions: The use of qualitative methods may yield broader data and has the potential to complement traditional quantitative methods and detect unexpected efficacy and safety signals, thereby maximizing the findings of early-phase clinical trial research.

Relevance:

100.00%

Abstract:

The calculation of the first few moments of elution peaks is necessary to determine the amount of component in the sample (peak area, or zeroth moment), the retention factor (first moment), and the column efficiency (second moment). Performing these calculations is a time-consuming and tedious task for the analyst, so data analysis is generally completed by the data stations associated with modern chromatographs. However, data acquisition software is a black box that provides no information to chromatographers on how their data are treated. These results are too important to be accepted on blind faith. The location of the peak integration boundaries is most important. In this manuscript, we explore the relationships between the size of the integration area, the relative position of the peak maximum within this area, and the accuracy of the calculated moments. We found that relationships between these parameters do exist and that computers can be programmed with relatively simple routines to automate the extraction of key peak parameters and to select acceptable integration boundaries. It was also found that the most accurate results are obtained when the S/N exceeds 200.
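The moments the abstract refers to can be sketched directly. The routine below is a hedged illustration, not the authors' code: it computes the zeroth moment (area), first moment (retention time) and second central moment (variance) by trapezoidal integration over a chosen window, then derives the plate count. The Gaussian test peak and the 3.0 to 7.0 min integration window are assumptions made for the example.

```python
# Hedged sketch of the first few moments of an elution peak, computed by
# numerical (trapezoidal) integration. Synthetic Gaussian test peak.
import math

def peak_moments(t, y):
    """Return (area m0, mean time m1, central variance m2) of a peak y(t)."""
    def trapz(f):
        return sum((f[i] + f[i + 1]) * (t[i + 1] - t[i]) / 2.0
                   for i in range(len(t) - 1))
    m0 = trapz(y)                                        # zeroth moment: area
    m1 = trapz([ti * yi for ti, yi in zip(t, y)]) / m0   # first: retention time
    m2 = trapz([(ti - m1) ** 2 * yi for ti, yi in zip(t, y)]) / m0  # variance
    return m0, m1, m2

# Synthetic Gaussian peak: retention time 5.0 min, sigma 0.2 min, unit area
t = [i * 0.01 for i in range(300, 701)]                  # 3.0 .. 7.0 min
y = [math.exp(-(ti - 5.0) ** 2 / (2 * 0.2 ** 2)) / (0.2 * math.sqrt(2 * math.pi))
     for ti in t]
m0, m1, m2 = peak_moments(t, y)
plates = m1 ** 2 / m2                # column efficiency N = m1^2 / m2 = 625
print(round(m0, 3), round(m1, 3), round(m2, 4), round(plates))
```

Narrowing the integration window, or shifting the peak maximum off-centre within it, biases m1 and especially m2, which is exactly the boundary-placement sensitivity the abstract investigates.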

Relevance:

100.00%

Abstract:

As one of the primary substances in a living organism, protein defines the character of each cell by interacting with the cellular environment to promote the cell's growth and function [1]. Previous studies in proteomics indicate that the functions of different proteins can be assigned based upon protein structures [2,3]. Knowledge of protein structures gives us an overview of protein fold space and is helpful for understanding the evolutionary principles behind structure. By observing the architectures and topologies of protein families, biological processes can be investigated more directly, with much higher resolution and finer detail. For this reason, the analysis of proteins, their structures and their interactions with other materials is emerging as an important problem in bioinformatics. However, the determination of protein structures is experimentally expensive and time consuming, which at present makes scientists largely dependent on sequence, rather than more general structure, to infer the function of a protein. For this reason, data mining technology has been introduced into this area to provide more efficient data processing and knowledge discovery approaches.

Unlike many data mining applications, which lack available data, the protein structure determination problem and its interaction studies can draw on a vast amount of biologically relevant information on proteins and their interactions, such as the Protein Data Bank (PDB) [4], the Structural Classification of Proteins (SCOP) database [5], the CATH database [6], UniProt [7], and others. The difficulty of predicting protein structures, especially 3D structures, and the interactions between proteins, as shown in Figure 6.1, lies in the computational complexity of the data. Although a large number of approaches have been developed to determine protein structures, such as ab initio modelling [8], homology modelling [9] and threading [10], more efficient and reliable methods are still greatly needed.

In this chapter, we introduce a state-of-the-art data mining technique, graph mining, which excels at defining and discovering interesting structural patterns in graph data sets, and take advantage of its expressive power to study protein structures, including protein structure prediction and comparison, and protein-protein interaction (PPI). The current graph pattern mining methods will be described, and typical algorithms will be presented, together with their applications in protein structure analysis.
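As a concrete illustration of the graph view of a protein that graph mining methods typically operate on, the sketch below builds a residue contact graph from C-alpha coordinates. This is a generic convention, not a method from the chapter: the 8 Å cutoff is a commonly used choice, and the coordinates are toy values rather than PDB data.

```python
# Hedged sketch: a protein structure as a contact graph. Residues are nodes;
# an edge links residues whose C-alpha atoms lie within a distance cutoff.
# Toy coordinates, not real PDB data; 8 A is an assumed (common) cutoff.
import math

def contact_graph(ca_coords, cutoff=8.0):
    """Return an adjacency map {residue index: set of neighbour indices}."""
    n = len(ca_coords)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(ca_coords[i], ca_coords[j]) <= cutoff:
                adj[i].add(j)
                adj[j].add(i)
    return adj

# Toy chain of five residues spaced 3.8 A apart along the x axis
coords = [(3.8 * i, 0.0, 0.0) for i in range(5)]
g = contact_graph(coords, cutoff=8.0)
print(g)
```

Once a structure is encoded this way, frequent-subgraph and other graph pattern mining algorithms can search for recurring structural motifs across a family of such graphs.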

The rest of the chapter is organized as follows: Section 6.2 gives a brief introduction to the fundamentals of proteins, the publicly accessible protein data resources and the current state of protein analysis research; Section 6.3 focuses on one of the state-of-the-art data mining methods, graph mining; Section 6.4 surveys existing work from the recent decade on protein structure analysis using advanced graph mining methods; finally, Section 6.5 concludes the chapter and outlines potential future work.

Relevance:

100.00%

Abstract:

BACKGROUND: Whether dietary indexes are associated with biomarkers of children's dietary intake is unclear. OBJECTIVE: The study aim was to examine the relations between diet quality and selected plasma biomarkers of dietary intake and serum lipid profile. METHODS: The study sample consisted of 130 children aged 4-13 y (mean ± SD: 8.6 ± 2.9 y) derived by using baseline data from an intervention study. The Dietary Guideline Index for Children and Adolescents (DGI-CA) comprises the following 11 components with age-specific criteria: 5 core food groups, whole-grain bread, reduced-fat dairy foods, discretionary foods (nutrient poor; high in saturated fat, salt, and added sugar), healthy fats/oils, water, and diet variety (possible score of 100). A higher score reflects greater compliance with dietary guidelines. Venous blood was collected for measurements of serum lipids, fatty acid composition, plasma carotenoids, lutein, lycopene, and α-tocopherol. Linear regression was used to examine the relation between DGI-CA score (independent variable) and concentrations of biomarkers by using the log-transformed variable (outcome), controlling for confounders. RESULTS: DGI-CA score was positively associated (P < 0.05) with plasma concentrations of lutein (standardized β = 0.17), α-carotene (standardized β = 0.28), β-carotene (standardized β = 0.26), and n-3 (ω-3) fatty acids (standardized β = 0.51) and inversely associated with plasma concentrations of lycopene (standardized β = -0.23) and stearic acid (18:0) (standardized β = -0.22). No association was observed between diet quality and α-tocopherol, n-6 fatty acids, or serum lipid profile (all P > 0.05). CONCLUSION: Diet quality, conceptualized as adherence to national dietary guidelines, is cross-sectionally associated with plasma biomarkers of dietary exposure but not serum lipid profile. This trial was registered with the Australia New Zealand Clinical Trial Registry (www.anztr.org.au) as ACTRN12609000453280.

Relevance:

100.00%

Abstract:

Summary: The aim of this study was to evaluate a number of foot-and-mouth disease (FMD) test methods for use in red deer. Ten animals were intranasally inoculated with the FMD virus (FMDV) O UKG 11/2001 and monitored for clinical signs, and samples (blood, serum, oral swabs, nasal swabs, probang samples and, if present, lesion swabs) were taken regularly over a 4-week period. Only one animal, deer 1103, developed clinical signs (lesions under the tongue and at the coronary band of the right hind hoof). This animal tested positive by 3D and IRES real-time reverse transcription polymerase chain reaction (rRT-PCR) in various swabs, lesion materials and serum. In a non-structural protein (NSP) in-house ELISA (NSP-ELISA-IH), one commercial ELISA (NSP-ELISA-PR) and a commercial antibody NSP pen-side test, only deer 1103 showed positive results from day post-inoculation (dpi) 14 onwards. Two other NSP-ELISAs detected anti-NSP serum antibodies with lower sensitivity. Deer 1103 also showed rising antibody levels in the virus neutralization test (VNT), the in-house SPO-ELISA-IH and the commercial SPO-ELISA-PR at dpi 9, and in another two commercial SPO-ELISAs at dpi 12 (SPO-ELISA-IV) and dpi 19 (SPO-ELISA-IZ), respectively. Six of the red deer that had been rRT-PCR and antibody negative were re-inoculated intramuscularly with the same O-serotype FMDV at dpi 14. None of these animals became rRT-PCR or NSP-ELISA positive, but all six became positive in the VNT, the in-house SPO-ELISA-IH and the commercial SPO-ELISA-PR. Two other commercial SPO-ELISAs were less sensitive or failed to detect animals as positive. The rRT-PCRs and the four most sensitive commercial ELISAs that had been used for the experimentally inoculated deer were further evaluated for diagnostic specificity (DSP) using 950 serum samples and 200 nasal swabs from non-infected animals. DSPs were 100% for the rRT-PCRs and between 99.8% and 100% for the ELISAs.
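The diagnostic specificity reported at the end of the abstract is simply the fraction of known-negative samples a test calls negative. A minimal sketch (the false-positive count below is an invented example, not a figure from the study):

```python
# Hedged sketch: diagnostic specificity (DSP) over a panel of known-negative
# samples. The false-positive count is illustrative, not from the study.
def diagnostic_specificity(n_negative_samples, n_false_positives):
    """DSP = true negatives / all known-negative samples."""
    return (n_negative_samples - n_false_positives) / n_negative_samples

# e.g. 2 false positives among the 950 negative sera would give 99.8%,
# the lower end of the 99.8-100% range reported for the ELISAs
print(round(100 * diagnostic_specificity(950, 2), 1))
```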

Relevance:

100.00%

Abstract:

BACKGROUND: Blood pressure targets in individuals treated for hypertension in primary care remain difficult to attain.

AIMS: To assess the role of practice nurses in facilitating intensive and structured management to achieve ideal blood pressure levels.

METHODS: We analysed outcome data from the Valsartan Intensified Primary carE Reduction of Blood Pressure Study. Patients were randomly allocated (2:1) to the study intervention or usual care. Across both groups, a practice nurse mediated blood pressure management for 439 of the 1492 patients with endpoint blood pressure data. Patient management was categorised as: standard usual care (n=348, 23.3%); practice nurse-mediated usual care (n=156, 10.5%); standard intervention (n=705, 47.3%); and practice nurse-mediated intervention (n=283, 19.0%). Blood pressure goal attainment at 26-week follow-up was then compared.

RESULTS: Mean age was 59.3±12.0 years and 62% were men. Baseline blood pressure was similar in practice nurse-mediated (usual care or intervention) and standard care management patients (150±16/88±11 vs. 150±17/89±11 mmHg, respectively). Practice nurse-mediated patients were more often assigned a stricter blood pressure goal of ⩽125/75 mmHg (33.7% vs. 27.3%, p=0.026). Practice nurse-mediated intervention patients achieved the greatest blood pressure falls and the highest level of blood pressure goal attainment (39.2%) compared with standard intervention (35.0%), practice nurse-mediated usual care (32.1%) and standard usual care (25.3%; p<0.001). Practice nurse-mediated intervention patients were almost twice as likely to achieve their blood pressure goal as standard usual care patients (adjusted odds ratio 1.92, 95% confidence interval 1.32 to 2.78; p=0.001).
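For orientation, an unadjusted odds ratio can be recovered from the goal-attainment rates and group sizes the abstract reports. This is a hedged back-of-the-envelope sketch: the published 1.92 is an adjusted estimate from a model whose covariates are not given here, so the unadjusted figure will differ slightly.

```python
# Hedged sketch: unadjusted odds ratio and 95% CI from two proportions.
# Group sizes and rates follow the abstract; the published OR is adjusted.
import math

def odds_ratio(p1, n1, p0, n0):
    """Odds ratio and Wald 95% CI comparing proportions p1 (n1) vs p0 (n0)."""
    a = round(p1 * n1); b = n1 - a        # events / non-events, group 1
    c = round(p0 * n0); d = n0 - c        # events / non-events, group 0
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Practice nurse-mediated intervention (39.2% of 283) vs standard usual care
# (25.3% of 348)
or_, lo, hi = odds_ratio(0.392, 283, 0.253, 348)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The unadjusted estimate lands close to the adjusted 1.92, which suggests the covariate adjustment shifted the effect only modestly.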

CONCLUSION: There is greater potential to achieve blood pressure targets in primary care with practice nurse-mediated hypertension management.

Relevance:

100.00%

Abstract:

Recent years have witnessed growing interest in context-aware recommender systems (CARS), which explore the impact of context factors on personalized Web service recommendation. The general idea of CARS methods is to mine historical service invocation records through a process of context-aware similarity computation. It is observed that the traditional similarity mining process is likely to generate large deviations in QoS values, owing to the dynamic change of contexts. As a consequence, including a considerable number of deviated QoS values in the similarity calculation can result in poor accuracy when predicting unknown QoS values. To address this problem, this paper first distinguishes between two definitions, Abnormal Data and True Abnormal Data, the latter of which should be eliminated. Second, we propose a novel CASR-TADE method by incorporating True Abnormal Data Elimination into context-aware Web service recommendation. Finally, experimental evaluations on a real-world Web services dataset show that the proposed CASR-TADE method significantly outperforms existing approaches.
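The general idea of screening deviated QoS records before similarity computation can be sketched with a simple per-service outlier filter. This is a generic stand-in, not the paper's method: CASR-TADE's criteria for separating abnormal from truly abnormal data are more involved, and the threshold rule and response times below are invented for illustration.

```python
# Hedged sketch of the general idea: before computing user-user similarity
# for QoS prediction, drop records whose QoS value deviates strongly from the
# service's typical value. Generic z-score filter, not the paper's CASR-TADE.
import math

def filter_abnormal(qos, k=2.0):
    """Remove QoS values more than k standard deviations from the mean."""
    vals = list(qos.values())
    mean = sum(vals) / len(vals)
    std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return {u: v for u, v in qos.items() if std == 0 or abs(v - mean) <= k * std}

# Response times (seconds) observed by users of one service; user 'u4'
# measured under an atypical context (e.g. a congested network)
observed = {'u1': 0.31, 'u2': 0.29, 'u3': 0.33, 'u4': 4.80, 'u5': 0.30}
kept = filter_abnormal(observed, k=1.5)
print(sorted(kept))
```

The paper's distinction matters precisely here: a naive filter like this one discards every outlier, whereas only "true" abnormal data, deviations not explained by the invocation context, should be eliminated.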

Relevance:

100.00%

Abstract:

From a review of the empirical sales ethics literature, this paper reports findings about some of the research methods used to investigate the decision-making of sales practitioners under ethical conditions. The review identifies that several of the methodological deficiencies raised by previous reviewers of the literature have not been adequately addressed by subsequent researchers. The paper primarily reviews quantitative research studies because of their prevalence in the empirical sales ethics literature, and because studies similar to these have contributed much to marketing ethics theory. This discussion also focuses on sampling and data collection methods, the treatment of respondent and non-response bias, the use of instruments and scales, and the application of the scenario technique. Some suggestions are made that would improve the research methods in each of these areas.

Relevance:

100.00%

Abstract:

We address the issue of identifying various classes of aggregation operators from empirical data while also preserving the ordering of the outputs. It is argued that the ordering of the outputs is more important than their numerical values; however, the usual data fitting methods are concerned only with fitting the values. We formulate the order-preservation problem as a standard mathematical programming problem, solved by standard numerical methods.
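The idea can be illustrated on the simplest aggregation operator, a weighted arithmetic mean of two inputs. The sketch below fits the weight by least squares while rejecting any candidate that violates the observed ordering of the outputs. A grid search stands in for the abstract's mathematical-programming formulation, and the data are invented for the example.

```python
# Hedged sketch: fit f(x) = w*x1 + (1-w)*x2 to data, keeping only weights
# whose fitted outputs respect the ordering of the observed outputs.
# Grid search stands in for the paper's mathematical programming approach.

def fit_order_preserving_mean(X, y, steps=1000):
    """Pick w in [0,1] minimizing squared error among order-preserving fits."""
    order = sorted(range(len(y)), key=lambda i: y[i])   # data's output order
    best_w, best_err = None, float('inf')
    for s in range(steps + 1):
        w = s / steps
        f = [w * x1 + (1 - w) * x2 for x1, x2 in X]
        # discard candidates whose outputs break the observed ordering
        if any(f[order[i]] > f[order[i + 1]] for i in range(len(order) - 1)):
            continue
        err = sum((fi - yi) ** 2 for fi, yi in zip(f, y))
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err

# Illustrative input pairs and desired aggregated outputs
X = [(0.2, 0.8), (0.5, 0.5), (0.9, 0.1), (0.7, 0.6)]
y = [0.40, 0.50, 0.65, 0.66]
w, err = fit_order_preserving_mean(X, y)
print(round(w, 3), round(err, 4))
```

In this toy case the ordering constraints restrict w to an interval, and the unconstrained least-squares optimum happens to fall inside it; when it does not, the order-preserving fit and the plain value fit genuinely differ, which is the abstract's point.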

Relevance:

100.00%

Abstract:

Purpose – This paper examines two management doctoral research projects to highlight the advantages of mixed methods as the primary research design.
Design/methodology/approach – This paper summarises the methods of data collection and analysis used by two doctoral students in their management research. The researchers used mixed methods approaches (quantitative and qualitative) to explore different areas of management.
Findings – The paper supports the view that triangulation of research methods strengthens the findings and inferences made in understanding social phenomena in greater depth, compared to using a single method.
Research limitations/implications – The paper relies on only two doctoral research projects, both of which utilise sequential mixed methods. The arguments made in the paper are therefore specific, because doctoral projects using methods other than those employed in the two projects were not considered.
Practical implications – Early researchers, in particular students commencing doctoral studies, should apply mixed methods research because it develops skills in the two most dominant data collection methods used in management research. This paper is a practical guide on how this can be done effectively.
Originality/value – The paper draws on two unique doctoral research projects. Its originality and value lie in providing experiences and practical insights on how mixed methods research is undertaken.